This file accompanies the Scheuerell et al. paper describing a new method for estimating time-varying community interactions (e.g., predation, competition) from multivariate time series data. The general model framework follows from the original MAR(1) model of Ives et al. (2003) and the subsequent state-space versions of Ives and Dakos (2012). Specifically, the TVVARSS model is
\[\mathbf{y}_{i,t} = \mathbf{x}_{j,t} + \mathbf{v}_{i,t}\]
\[\mathbf{x}_{j,t} = \mathbf{B}_t \mathbf{x}_{j,t-1} + \mathbf{C} \mathbf{c}_{j,t} + \mathbf{w}_{j,t}\]
\[\mathbf{B}_t = \mathbf{B}_{t-1} + \mathbf{e}_{t}\]
The measured variates (in log-space) at station \(i\) and time \(t\) \(\left(\mathbf{y}_{i,t}\right)\) are observations of some true, but unknown, state \(j\) at time \(t\) \(\left(\mathbf{x}_{j,t}\right)\). The states at time \(t\) are a function of intra- and inter-guild interactions \(\left(\mathbf{B}_t \mathbf{x}_{j,t-1} \right)\) and environmental covariates \(\left(\mathbf{C} \mathbf{c}_{j,t}\right)\). The interaction matrix \(\mathbf{B}_t\) follows a random walk.
When setting up the data below, it will be important to keep in mind the dimensions of the vectors and matrices for the equations above. Specifically, for \(m\) species/guilds at \(n\) sites/stations, \(q\) covariates, and \(T\) time points, we know:
\(\mathbf{y}_{i,t}\) is \(\left[ m \times 1 \right]\) and hence \(\mathbf{y}_{i}\) is \(\big[ m \times T \big]\);
\(\mathbf{x}_{j,t}\) is \(\left[ m \times 1 \right]\) and hence \(\mathbf{x}_{j}\) is \(\big[ m \times T \big]\);
\(\mathbf{B}_{t}\) is \(\big[ m \times m \big]\);
\(\mathbf{C}\) is \(\left[ m \times q \right]\);
\(\mathbf{c}_{j,t}\) is \(\left[ q \times 1 \right]\) and hence \(\mathbf{c}_{j}\) is \(\left[ q \times T \right]\).
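To make these dimensions concrete, here is a minimal simulation of the model equations for a single state (toy values of \(m\), \(q\), and \(T\); all object names here are illustrative and are not used in the analysis below):

```r
# toy dimensions: m guilds, q covariates, T time steps
m <- 3; q <- 2; TT <- 10
set.seed(123)
xx <- matrix(0, m, TT)                      # states: [m x T]
cc <- matrix(rnorm(q*TT), q, TT)            # covariates: [q x T]
CC <- matrix(rnorm(m*q, sd=0.1), m, q)      # covariate effects: [m x q]
BB <- array(0, c(m, m, TT))                 # interactions: [m x m] at each t
BB[,,1] <- diag(0.5, m)
for(t in 2:TT) {
  # B follows a random walk; states follow the MAR(1) equation
  BB[,,t] <- BB[,,t-1] + matrix(rnorm(m*m, sd=0.01), m, m)
  xx[,t]  <- BB[,,t] %*% xx[,t-1] + CC %*% cc[,t] + rnorm(m, sd=0.1)
}
yy <- xx + rnorm(m*TT, sd=0.1)              # observations: [m x T]
```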
Our analyses require several packages not installed with base R, so we begin by installing them (if necessary) and then loading them.
# install (if necessary) & load required packages
for(pkg in c("RCurl", "reshape2", "R2jags")) {
if(!require(pkg, character.only = TRUE)) {
install.packages(pkg)
library(pkg, character.only = TRUE)
}
}
We begin with the original data from Kenner et al. (2013), which are available from the Ecological Society of America Archives at
http://www.esapubs.org/archive/ecol/E094/244/
We will use the benthic plant, invertebrate, and fish data, plus the midwater fish data. Be patient as these files will take a little while to download.
# set URL
URL <- "http://www.esapubs.org/archive/ecol/E094/244/"
# get benthic algae/invert data
dat_bi <- read.csv(paste0(URL,"Benthic%20density%20raw%20data.csv"))
colnames(dat_bi) <- tolower(colnames(dat_bi))
# get benthic fish data
dat_bf <- read.csv(paste0(URL,"Benthic%20fish%20density%20raw%20data.csv"))
colnames(dat_bf) <- tolower(colnames(dat_bf))
# get midwater fish data
dat_mf <- read.csv(paste0(URL,"Midwater%20fish%20density%20raw%20data.csv"))
colnames(dat_mf) <- tolower(colnames(dat_mf))
For our purposes we want the mean density of each species by sampling occasion and location.
# benthic algae/invert data
benthos <- aggregate(density ~ speciescode + station + period, data=dat_bi, "mean")
# combine bottom & midwater fish data
fish <- rbind(dat_bf,dat_mf)
# include juveniles and adults together
fish[,"density"] <- fish[,"adultdensity"] + fish[,"juvdensity"]
# fish data
fish_2 <- aggregate(density ~ speciescode + station + period, data=fish, "mean")
# all data together
dat <- rbind(benthos,fish_2)
We assign all of the species in the Kenner dataset to 1 of 16 guilds (see Table S? of the manuscript for a print version). We have saved the lookup table as a .csv file, which is available on the GitHub project site.
# set URL
URL <- "https://raw.githubusercontent.com/eric-ward/TVVARSS/master/"
token <- "?token=AE0I0KEXdUSM-xFT_esp3H6446JfyOqQks5WsQ-ywA%3D%3D"
guilds_file <- "species_sampled_lookup.csv"
# get LUT of guild names
guilds <- read.csv(textConnection(getURL(paste0(URL,guilds_file,token))))
colnames(guilds) <- tolower(colnames(guilds))
# assign guild names
for(i in 1:nrow(guilds)) {
dat[dat[,"speciescode"] %in% guilds[i,"speciescode"],"guild"] <- guilds[i,"guild"]
}
There are two guilds that we do not want to include in our analysis because they are too rare in the dataset: 1) “large piscivores” (e.g., leopard shark, Triakis semifasciata), and 2) “pelagic piscivores” (i.e., jack mackerel, Trachurus symmetricus).
# spp to drop/eliminate
spp_out <- c("Piscivorous fishes - pelagic","Large piscivorous fishes")
# drop spp/guilds of no interest
dat <- dat[!(dat$guild %in% spp_out),]
# guild names
guild_names <- sort(as.vector(unique(guilds$guild)[!(unique(guilds$guild) %in% spp_out)]))
The original data were collected at 7 stations around San Nicolas, but we only model the dynamics of 6 of them. The seventh station, Sandy Cove, was added later in the study, is relatively deep compared to the other stations, and was surveyed with much less frequency. We assume that the data from the 6 stations are observations of 4 underlying “states”, or realizations of the San Nicolas community (see table below). In the North region, we treat station 1 as a single observation of one state. In the West region, we treat stations 2 and 3 as samples from the same state. In the South region, we model 2 different states. Stations 4 and 5 are observations of a third state, and station 6 is from a fourth state.
| Station | Name | Region | State |
|---|---|---|---|
| 1 | Nav Fac | North | 1 |
| 2 | West End 1 | West | 2 |
| 3 | West End 2 | West | 2 |
| 4 | East Dutch Harbor | South | 3 |
| 5 | West Dutch Harbor | South | 3 |
| 6 | Daytona | South | 4 |
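For reference, the station-to-state mapping in the table can be written as a simple lookup vector (a sketch only; the analysis below encodes the same mapping implicitly via the matrices `YY_1` through `YY_6`):

```r
# state index for each of stations 1-6 (station 7 is dropped below)
stn2state <- c(`1` = 1, `2` = 2, `3` = 2, `4` = 3, `5` = 3, `6` = 4)
# stations 2 & 3 share a state, as do stations 4 & 5
stn2state[c("2", "3")]
```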
So, we begin by dropping station 7.
# drop station 7
dat <- dat[dat$station != 7,]
Now we need to aggregate the data by guild and sampling station.
dat_agg <- aggregate(density ~ guild + station + period, data=dat, "sum")
We will treat 0’s in the aggregated data as NA’s, in part because the log transformation below is undefined at zero:
dat_agg[dat_agg$density==0,"density"] <- NA
We need to use log-density to meet model assumptions, and we want to de-mean the data so that we do not need to estimate the means within the model.
# log of density
dat_agg$ldens <- log(dat_agg$density)
# transform data to wide form (rows: time; cols: guilds x stations)
dat_agg_w <- dcast(dat_agg, period ~ guild + station, value.var="ldens")
# de-mean the data
dat_agg_w[,-1] <- scale(dat_agg_w[,-1],center=TRUE,scale=FALSE)
There are a few time periods with missing samples that we need to set to NA.
# get time periods with no samples (ie, NAs)
prd_all <- seq(max(dat_agg_w[,"period"]))
prd_miss <- prd_all[!(prd_all %in% dat_agg_w[,"period"])]
# insert NAs for missing dates
dat_miss <- cbind(prd_miss,matrix(NA,length(prd_miss),ncol(dat_agg_w)-1))
colnames(dat_miss) <- colnames(dat_agg_w)
# concatenate the 2 data frames
dat_agg_w <- rbind(dat_agg_w,dat_miss)
# re-order the data by time
dat_agg_w <- dat_agg_w[order(dat_agg_w[,"period"]),]
rownames(dat_agg_w) <- NULL
The data are now organized within one data frame, which is \(\left[T \times (m \times n) \right]\), but we want our data to be \(n\) different matrices that are each \(\left[m \times T \right]\).
# number of time points
TT <- dim(dat_agg_w)[1]
# number of stations
NN <- length(unique(dat$station))
# number of guilds
MM <- (dim(dat_agg_w)[2]-1)/NN
# create station-specific matrices of data
for(i in 1:NN) {
tmp <- t(dat_agg_w[,seq(2,by=NN,length.out=MM)+(i-1)])
rownames(tmp) <- guild_names
assign(paste0("YY_",i),tmp)
}
The last thing we need to do here is define a few scalars and indices to use within the JAGS model below.
M2 <- MM*MM
row_idx <- rep(seq(1,MM), MM)
BB_idx <- matrix(seq(1,M2),MM,MM)
BB_prior <- diag(M2)
BB_mean <- rep(0,M2)
BB_diag <- seq(1,M2,by=(MM+1)) # these are the indices of BB_vec for diagonal of BB_mat
BB_off <- seq(1,M2)[-BB_diag]
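As a quick sanity check on this indexing scheme (shown here with a toy `mm = 3` in lowercase so as not to overwrite the objects above), the index matrix maps between the vectorized interactions (`BB_vec` in the JAGS code below) and their matrix form, and the diagonal index picks out the density-dependence terms:

```r
# toy versions of the indexing objects for mm = 3 guilds
mm <- 3; m2 <- mm*mm
bb_idx  <- matrix(seq(1, m2), mm, mm)
bb_diag <- seq(1, m2, by = mm+1)
# a matrix vectorized column-major round-trips through the index matrix
bb <- matrix(rnorm(m2), mm, mm)
bb_vec <- c(bb)
stopifnot(identical(matrix(bb_vec[bb_idx], mm, mm), bb))
# bb_diag indexes the diagonal elements in the vectorized form
stopifnot(identical(bb_vec[bb_diag], diag(bb)))
```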
There are several external drivers known to affect food web dynamics in kelp forest ecosystems. In particular, we are interested in the potential roles of 1) predatory sea otters; 2) the El Niño - Southern Oscillation (ENSO) climate phenomenon; and 3) commercial urchin harvest.
The sea otter data come from multiple sources. NEED INFO HERE RE: EARLY SOURCE(S) …and years from 1995-2011 come from Kenner et al. (2013).
NOTE THAT THESE VALUES WILL BE REPLACED WITH NEW DATA
# set URL
URL <- "https://raw.githubusercontent.com/eric-ward/TVVARSS/master/"
token <- "?token=AE0I0EUPnZXeDpa9rfe68HuRJKHW1Iniks5Wr_aVwA%3D%3D"
otterFile <- "adult_otters_current.csv"
# get otter count data
otters <- read.csv(textConnection(getURL(paste0(URL,otterFile,token))))
# convert to [3 x TT] matrix for JAGS; each row is different region
ottr <- t(otters[,c("north","south","west")])
# rescale to make comparable with other covariates
ottr_m <- mean(ottr)
ottr_s <- sd(as.vector(ottr))
ottr <- (ottr - ottr_m) / ottr_s
We used the El Niño / Southern Oscillation (ENSO) index to capture large-scale environmental conditions around San Nicolas. Strong ENSO events are characterized by relatively warm water and intense winter storms that physically disturb benthic habitats. Specifically, we used the sea-surface temperatures (SST) from the Niño 3.4 region of the tropical Pacific Ocean (for more information on ENSO, see NOAA’s Climate Prediction Center).
The ENSO data are recorded monthly, but the food web data were sampled twice per year (i.e., spring and autumn). Thus, we want ENSO signals indicative of the period preceding each spring and autumn sampling date. Although those dates vary somewhat from year to year, we selected these months for each period:
| Season | ENSO months |
|---|---|
| Spring | March, April, May |
| Autumn | August, September, October |
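The mapping in the table can be sketched as a small helper function (illustrative only; the code below applies the same logic with `%in%` directly on the downloaded data):

```r
# map month numbers to sampling periods (1 = Spring, 2 = Autumn, NA otherwise)
month2period <- function(mon) {
  ifelse(mon %in% c(3, 4, 5), 1, ifelse(mon %in% c(8, 9, 10), 2, NA))
}
month2period(c(4, 9, 12))
```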
The SST data (and several other atmospheric indices) are available from NOAA’s Climate Prediction Center. Let’s download the file and trim away the information we don’t need.
# first year of data
yr_first <- 1980
# last year of data
yr_last <- 2011
# first set of months
mon1 <- c(3,4,5)
# second set of months
mon2 <- c(8,9,10)
# get all ENSO data
nino <- read.table("http://www.cpc.ncep.noaa.gov/data/indices/ersst4.nino.mth.81-10.ascii", header=TRUE)
# trim data to correct years & index
nino34 <- nino[nino$YR>=yr_first & nino$YR<=yr_last, c("YR","MON","NINO3.4")]
# assign months to periods (1=Spring; 2=Autumn)
nino34[nino34$MON %in% mon1,"period"] <- 1
nino34[nino34$MON %in% mon2,"period"] <- 2
# ts of period means
anom34 <- aggregate(NINO3.4 ~ period + YR, nino34, mean)[,c("YR","period","NINO3.4")]
# convert to z-score (i.e., a temperature anomaly)
anom34$NINO3.4 <- scale(anom34$NINO3.4)
colnames(anom34)[c(1,3)] <- c("year","anom34")
# convert to [1 x TT] matrix for JAGS
enso <- matrix(anom34$anom34,nrow=1)
We obtained the commercial harvest (in metric tonnes) of sea urchins around San Nicolas Island from GET CONTACT INFO. The harvest data are also available on the GitHub project site.
# set URL
URL <- "https://raw.githubusercontent.com/eric-ward/TVVARSS/master/"
token <- "?token=AE0I0Cg5QEMXImfUitSN4lYQLBvd27BTks5Wr-vvwA%3D%3D"
urchin_file <- "urchin_harvest_ts.csv"
# get urchin harvest data
urchins <- read.csv(textConnection(getURL(paste0(URL,urchin_file,token))))
# rescale & convert to [1 x TT] matrix for JAGS
harv <- matrix(scale(urchins$sum),nrow=1)
We need to specify the TVVARSS model in JAGS notation before we can fit it. Note that the code below is not written to be universally generic, but rather to emphasize the number of states, observations, and covariates in this specific application.
model <- cat("
model {
#--------
# PRIORS
#--------
# BB is interaction matrix
# precision matrix
BB_tau ~ dwish(BB_prior, M2);
BB_tau_off ~ dgamma(0.001,0.001);
BB_tau_diag ~ dgamma(0.001,0.001);
# initial X0, or state of nature, varies by state, but shared prior
X0_tau ~ dwish(BB_prior[1:MM,1:MM], MM); # prior precision matrix
# process variance is independent/unequal by region
# and independent/unequal across spp
for(i in 1:MM) {
for(j in 1:3) {
QQ_tau[i,j] ~ dgamma(0.01,0.01);
}
}
# Priors for covariate effects
for(i in 1:MM) {
CC_enso[i,1] ~ dnorm(0,0.001); # effect of enso, varies by spp
CC_ottr[i,1] ~ dnorm(0,0.001); # effect of otters, varies by spp
}
for(i in 1:(MM-1)) {
CC_harv[i,1] <- 0; # effect of urchin harvest on non-urchin
}
CC_harv[MM,1] ~ dnorm(0,0.001); # effect of urchin harvest on urchins
#------------
# LIKELIHOOD
#------------
# first time step
BB_vec[1:M2,1] ~ dmnorm(BB_mean,BB_tau); # interactions at time 1
# convert BB_vec to a matrix for initial time step
for(cols in 1:MM) {
BB_mat[1:MM,cols,1] <- BB_vec[BB_idx[1,cols]:BB_idx[MM,cols],1];
}
X_1[1:MM,1] ~ dmnorm(BB_mean[1:MM],X0_tau); # state 1
X_2[1:MM,1] ~ dmnorm(BB_mean[1:MM],X0_tau); # state 2
X_3[1:MM,1] ~ dmnorm(BB_mean[1:MM],X0_tau); # state 3
X_4[1:MM,1] ~ dmnorm(BB_mean[1:MM],X0_tau); # state 4
# time steps 2:TT
for(time in 2:TT) {
for(cols in 1:MM) {
# convert vector form to matrix form (JAGS requires this explicit mapping)
BB_mat[1:MM,cols,time] <- BB_vec[BB_idx[1,cols]:BB_idx[MM,cols],time];
}
# calculate predicted state vectors
# (rows of ottr: 1 = North, 2 = South, 3 = West; see above)
mu_X_1[1:MM,time] <- BB_mat[1:MM,1:MM,time] %*% (X_1[1:MM,time-1]) +
CC_enso[1:MM,1] * enso[1,time-1] +
CC_ottr[1:MM,1] * ottr[1,time-1] +
CC_harv[1:MM,1] * harv[1,time-1];
mu_X_2[1:MM,time] <- BB_mat[1:MM,1:MM,time] %*% (X_2[1:MM,time-1]) +
CC_enso[1:MM,1] * enso[1,time-1] +
CC_ottr[1:MM,1] * ottr[3,time-1] +
CC_harv[1:MM,1] * harv[1,time-1];
mu_X_3[1:MM,time] <- BB_mat[1:MM,1:MM,time] %*% (X_3[1:MM,time-1]) +
CC_enso[1:MM,1] * enso[1,time-1] +
CC_ottr[1:MM,1] * ottr[2,time-1] +
CC_harv[1:MM,1] * harv[1,time-1];
mu_X_4[1:MM,time] <- BB_mat[1:MM,1:MM,time] %*% (X_4[1:MM,time-1]) +
CC_enso[1:MM,1] * enso[1,time-1] +
CC_ottr[1:MM,1] * ottr[2,time-1] +
CC_harv[1:MM,1] * harv[1,time-1];
# include process variation - normally distributed errors
# independent across spp and site
for(spp in 1:MM) {
X_1[spp,time] ~ dnorm(mu_X_1[spp,time], QQ_tau[spp,1]);
X_2[spp,time] ~ dnorm(mu_X_2[spp,time], QQ_tau[spp,2]);
X_3[spp,time] ~ dnorm(mu_X_3[spp,time], QQ_tau[spp,3]);
X_4[spp,time] ~ dnorm(mu_X_4[spp,time], QQ_tau[spp,3]);
}
# update BB
# off-diagonal elements of BB: interactions
for(i in 1:(M2-MM)) {
BB_vec[BB_off[i],time] ~ dnorm(BB_vec[BB_off[i],time-1], BB_tau_off);
}
# diagonal elements of BB: density dependence
for(i in 1:MM) {
BB_vec[BB_diag[i],time] ~ dnorm(BB_vec[BB_diag[i],time-1], BB_tau_diag);
}
} # end time loop
# observation / data model; assume RR = diagonal and unequal
for(i in 1:MM) {
RR_tau[i] ~ dgamma(0.001,0.001);
}
for(time in 1:TT) {
for(spp in 1:MM) {
YY_1[spp,time] ~ dnorm(X_1[spp,time],RR_tau[spp]);
YY_2[spp,time] ~ dnorm(X_2[spp,time],RR_tau[spp]);
YY_3[spp,time] ~ dnorm(X_2[spp,time],RR_tau[spp]);
YY_4[spp,time] ~ dnorm(X_3[spp,time],RR_tau[spp]);
YY_5[spp,time] ~ dnorm(X_3[spp,time],RR_tau[spp]);
YY_6[spp,time] ~ dnorm(X_4[spp,time],RR_tau[spp]);
}
}
} # end JAGS model description
", file = "TVVARSS.txt")
The last thing we need to do before fitting the model in JAGS is to specify: 1) the data and indices to pass to JAGS; 2) the model parameters and states for JAGS to return; and 3) the MCMC control parameters.
Please note that the following code will take a LONG time to run (~2 hours).
# 1. data and indices to pass to JAGS
jags_dat <- list("YY_1","YY_2","YY_3","YY_4","YY_5","YY_6",
"enso", "ottr", "harv",
"MM","M2","TT","BB_idx","BB_prior","BB_mean","BB_off","BB_diag")
# 2. model params/states for JAGS to return
jags_par <- c("BB_vec", "BB_tau_off", "BB_tau_diag", "CC_enso", "CC_ottr", "CC_harv",
"X_1", "X_2", "X_3", "X_4", "QQ_tau", "RR_tau")
# 3. MCMC control params
jags_mod <- list(data = jags_dat,
parameters.to.save = jags_par,
inits = NULL,
model.file = "TVVARSS.txt",
n.chains = as.integer(4),
n.burnin = as.integer(1.0e5),
n.thin = as.integer(50),
n.iter = as.integer(1.2e5),
DIC = TRUE)
# start timer
timer_start <- proc.time()
# fit the model in JAGS & store the results (uncomment to refit from scratch)
# jags_fit <- do.call(jags.parallel, jags_mod)
# instead, load a previously saved fit
load(url("http://faculty.washington.edu/scheuerl/tmp_JAGS_fit.RData"))
# stop timer
run_time_in_min <- round(((proc.time()-timer_start)/60)["elapsed"], 0)
There are several diagnostic measures we can examine to evaluate the model fit. To facilitate this, we will first “attach” the JAGS model object to the workspace so we can easily access what are otherwise deeply embedded list objects.
attach.jags(jags_fit)
We’ll start by examining X-Y plots of the observed data versus the estimated true states for each of the 14 guilds.
par(mfrow = c(4,4), mgp=c(2,1,0), mai=c(0.4,0.4,0.3,0.05))
for(i in 1:MM) {
X_med <- apply(X_1[,i,], 2, quantile, 0.5)
xmin <- min(X_med, na.rm=TRUE)
xmax <- max(X_med, na.rm=TRUE)
ymin <- min(YY_1[i,], na.rm=TRUE)
ymax <- max(YY_1[i,], na.rm=TRUE)
plot(X_med, YY_1[i,], col="blue", pch=16, cex=0.8,
xlim=c(xmin,xmax), ylim=c(ymin,ymax),
main=guild_names[i], xlab="Estimated", ylab="Observed", cex.main=0.9)
abline(0,1,col="darkgray")
}
par(mfrow = c(4,4), mgp=c(2,1,0), mai=c(0.4,0.4,0.3,0.05))
for(i in 1:MM) {
X_med <- apply(X_2[,i,], 2, quantile, 0.5)
xmin <- min(X_med, na.rm=TRUE)
xmax <- max(X_med, na.rm=TRUE)
ymin <- min(c(YY_2[i,],YY_3[i,]), na.rm=TRUE)
ymax <- max(c(YY_2[i,],YY_3[i,]), na.rm=TRUE)
plot(X_med, YY_2[i,], col="blue", pch=16, cex=0.8,
xlim=c(xmin,xmax), ylim=c(ymin,ymax),
main=guild_names[i], xlab="Estimated", ylab="Observed", cex.main=0.9)
points(X_med, YY_3[i,], col = "darkred", pch=16, cex=0.8)
abline(0,1,col="darkgray")
}
par(mfrow = c(4,4), mgp=c(2,1,0), mai=c(0.4,0.4,0.3,0.05))
for(i in 1:MM) {
X_med <- apply(X_3[,i,], 2, quantile, 0.5)
xmin <- min(X_med, na.rm=TRUE)
xmax <- max(X_med, na.rm=TRUE)
ymin <- min(c(YY_4[i,],YY_5[i,]), na.rm=TRUE)
ymax <- max(c(YY_4[i,],YY_5[i,]), na.rm=TRUE)
plot(X_med, YY_4[i,], col="blue", pch=16, cex=0.8,
xlim=c(xmin,xmax), ylim=c(ymin,ymax),
main=guild_names[i], xlab="Estimated", ylab="Observed", cex.main=0.9)
points(X_med, YY_5[i,], col = "darkred", pch=16, cex=0.8)
abline(0,1,col="darkgray")
}
par(mfrow = c(4,4), mgp=c(2,1,0), mai=c(0.4,0.4,0.3,0.05))
for(i in 1:MM) {
X_med <- apply(X_4[,i,], 2, quantile, 0.5)
xmin <- min(X_med, na.rm=TRUE)
xmax <- max(X_med, na.rm=TRUE)
ymin <- min(YY_6[i,], na.rm=TRUE)
ymax <- max(YY_6[i,], na.rm=TRUE)
plot(X_med, YY_6[i,], col="blue", pch=16, cex=0.8,
xlim=c(xmin,xmax), ylim=c(ymin,ymax),
main=guild_names[i], xlab="Estimated", ylab="Observed", cex.main=0.9)
abline(0,1,col="darkgray")
}
We can also examine time series of the estimated states \(\pm\) 95% credible intervals with the observed data overlaid.
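Computing these intervals is just a matter of taking quantiles across the posterior draws; a self-contained sketch with fake draws (the names here are illustrative, mirroring the code below):

```r
# fake posterior: 1000 draws of one state at 5 time points
set.seed(42)
draws <- matrix(rnorm(1000 * 5, mean = 2), 1000, 5)
# medians and 95% credible limits, column by column (i.e., per time point)
med     <- apply(draws, 2, median)
lowerCI <- apply(draws, 2, quantile, 0.025)
upperCI <- apply(draws, 2, quantile, 0.975)
```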
# get median for each state
median_X1 <- apply(X_1,c(2,3),median)
par(mfrow=c(4,4), mgp=c(2,0.5,0), mai=c(0.2,0.25,0.3,0.05))
for(i in 1:MM) {
lowerCI <- apply(X_1[,i,],2,quantile, 0.025)
upperCI <- apply(X_1[,i,],2,quantile, 0.975)
ymin <- min(c(lowerCI, YY_1[i,]), na.rm=T)
ymax <- max(c(upperCI, YY_1[i,]), na.rm=T)
plot(median_X1[i,], main = guild_names[i], ylim=c(ymin, ymax), type="n",
cex.main=0.9, cex.axis=0.9, xaxt="n")
axis(1,at=seq(1,61,20),labels=seq(1980,2010,10))
Xs <- seq(1,dim(X_1)[3])
polygon(c(Xs, rev(Xs)), c(lowerCI, rev(upperCI)), border=NA, col="grey")
lines(median_X1[i,], col="black", pch=16, cex=0.8)
points(YY_1[i,], col="blue", pch=16, cex=0.8)
}
# take median for each state
median_X2 <- apply(X_2,c(2,3),median)
par(mfrow=c(4,4), mgp=c(2,0.5,0), mai=c(0.2,0.25,0.3,0.05))
for(i in 1:MM) {
lowerCI <- apply(X_2[,i,],2,quantile, 0.025)
upperCI <- apply(X_2[,i,],2,quantile, 0.975)
ymin <- min(c(lowerCI, YY_2[i,], YY_3[i,]), na.rm=T)
ymax <- max(c(upperCI, YY_2[i,], YY_3[i,]), na.rm=T)
plot(median_X2[i,], main = guild_names[i], ylim=c(ymin, ymax), type="n",
cex.main=0.9, cex.axis=0.9, xaxt="n")
axis(1,at=seq(1,61,20),labels=seq(1980,2010,10))
Xs <- seq(1,dim(X_2)[3])
polygon(c(Xs, rev(Xs)), c(lowerCI, rev(upperCI)), border=NA, col="grey")
lines(median_X2[i,], col="black", pch=16, cex=0.8)
points(YY_2[i,], col="blue", pch=16, cex=0.8)
points(YY_3[i,], col="darkred", pch=16, cex=0.8)
}
# take median for each state
median_X3 <- apply(X_3,c(2,3),median)
par(mfrow=c(4,4), mgp=c(2,0.5,0), mai=c(0.2,0.25,0.3,0.05))
for(i in 1:MM) {
lowerCI <- apply(X_3[,i,],2,quantile, 0.025)
upperCI <- apply(X_3[,i,],2,quantile, 0.975)
ymin <- min(c(lowerCI, YY_4[i,], YY_5[i,]), na.rm=T)
ymax <- max(c(upperCI, YY_4[i,], YY_5[i,]), na.rm=T)
plot(median_X3[i,], main = guild_names[i], ylim=c(ymin, ymax), type="n",
cex.main=0.9, cex.axis=0.9, xaxt="n")
axis(1,at=seq(1,61,20),labels=seq(1980,2010,10))
Xs <- seq(1,dim(X_3)[3])
polygon(c(Xs, rev(Xs)), c(lowerCI, rev(upperCI)), border=NA, col="grey")
lines(median_X3[i,], col="black", pch=16, cex=0.8)
points(YY_4[i,], col="blue", pch=16, cex=0.8)
points(YY_5[i,], col="darkred", pch=16, cex=0.8)
}
# take median for each state
median_X4 <- apply(X_4,c(2,3),median)
par(mfrow=c(4,4), mgp=c(2,0.5,0), mai=c(0.2,0.25,0.3,0.05))
for(i in 1:MM) {
lowerCI <- apply(X_4[,i,],2,quantile, 0.025)
upperCI <- apply(X_4[,i,],2,quantile, 0.975)
ymin <- min(c(lowerCI, YY_6[i,]), na.rm=T)
ymax <- max(c(upperCI, YY_6[i,]), na.rm=T)
plot(median_X4[i,], main = guild_names[i], ylim=c(ymin, ymax), type="n",
cex.main=0.9, cex.axis=0.9, xaxt="n")
axis(1,at=seq(1,61,20),labels=seq(1980,2010,10))
Xs <- seq(1,dim(X_4)[3])
polygon(c(Xs, rev(Xs)), c(lowerCI, rev(upperCI)), border=NA, col="grey")
lines(median_X4[i,], col="black", pch=16, cex=0.8)
points(YY_6[i,], col="blue", pch=16, cex=0.8)
}
Now let’s look at some of the estimated parameters, similar to those in Scheuerell et al. First, we’ll look at the large matrix of time-varying coefficients (the \(\mathbf{B}\) matrix). Off-diagonal elements (green) represent inter-guild interactions like competition and predation; diagonal elements (blue) represent intra-guild density dependence.
# desired alpha for CIs
alpha <- 0.10
# short guild names for plots
guilds_SH <- c("Sm piscivores","Sheephead","Pred inverts",
"Planktivores","Sm invert-eaters","Cleaner fish",
"Omni inverts","Abalone","Herb fishes",
"Snails","Urchins","Limpets",
"Giant kelp","Under kelp")
# re-order guild names from upper to lower trophic levels
reorder <- c(11, 5, 9, 8, 10, 2, 7, 1, 4, 12, 14, 6, 3, 13)
# index for plotting
BB_plt <- c(BB_idx[reorder,reorder])
# setup plot region
layout(matrix(seq(M2), MM, MM, byrow=FALSE), widths=c(rep(1,MM),6))
par(mai=rep(0.025,4), omi=c(0,0.3,0.3,0))
# plot time series of interaction coefficients
for(i in 1:M2) {
pdat <- cbind(apply(BB_vec[,BB_plt[i],],2,quantile,alpha/2),
apply(BB_vec[,BB_plt[i],],2,mean),
apply(BB_vec[,BB_plt[i],],2,quantile,1-alpha/2))
matplot(seq(TT), pdat, type="n", xaxt="n",xlab="",yaxt="n",ylab="",ylim=c(-0.9,0.9))
abline(h=0, col="gray")
clr <- "green4"
fill_clr <- rgb(0, 139, 0, alpha = 50, maxColorValue = 255)  # based on green4
if(i %in% seq(1,M2,MM+1)) {
clr <- "blue"
fill_clr <- rgb(0, 0, 255, alpha = 50, maxColorValue = 255)
}
polygon(c(seq(TT),rev(seq(TT))), c(pdat[,1], rev(pdat[,3])), col = fill_clr, border = NA)
lines(seq(TT), pdat[,2],lwd = 2.5, col = clr)
if(i <= MM) { mtext(guilds_SH[i], side=2, line=0.3,cex=0.3) }
if(i %in% BB_idx[1,]) { mtext(guilds_SH[which(BB_idx[1,]==i)], side=3, line=0.3,cex=0.3) }
}
Next, we can look at the estimated covariate effects. First, the effect of otters.
par(mfrow = c(4,4), mgp=c(2,0.5,0), mai=c(0.2,0.25,0.3,0.05))
brks <- seq(floor(min(CC_ottr)*40),ceiling(max(CC_ottr)*40))/40
for(i in 1:MM) {
hist(CC_ottr[,i], breaks=brks, col="grey", border=NA, freq=TRUE,
main=guild_names[i], cex.main=0.9, cex.axis=0.9)
}
We can also look at the effects of ENSO on each guild.
par(mfrow = c(4,4), mgp=c(2,0.5,0), mai=c(0.2,0.25,0.3,0.05))
brks <- seq(floor(min(CC_enso)*40),ceiling(max(CC_enso)*40))/40
for(i in 1:MM) {
hist(CC_enso[,i], breaks=brks, col="grey", border=NA, freq=TRUE,
main=guild_names[i], cex.main=0.9, cex.axis=0.9)
}
And finally, here is the effect of urchin harvest on urchins.
brks <- seq(floor(min(CC_harv[,MM])*40),ceiling(max(CC_harv[,MM])*40))/40
hist(CC_harv[,MM], breaks=brks, col="grey", border=NA, xlab="",
main = "Effect of harvest on urchins", cex.main=0.9, cex.axis=0.9)